Group activity recognition based on partitioned attention mechanism and interactive position relationship
Bo LIU, Linbo QING, Zhengyong WANG, Mei LIU, Xue JIANG
Journal of Computer Applications    2022, 42 (7): 2052-2057.   DOI: 10.11772/j.issn.1001-9081.2021060904

Group activity recognition is a challenging task in complex scenes, involving both the interactions and the relative spatial positions of a group of people. Current group activity recognition methods either lack fine-grained feature design or fail to take full advantage of the interactive features among individuals. Therefore, a network framework based on a partitioned attention mechanism and interactive position relationships was proposed, which further considered the semantic features of individual limbs and explored the relationship between interaction-feature similarity and behavioral consistency among individuals. Firstly, the original video sequences and optical flow image sequences were used as the input of the network, and a partitioned attention feature module was introduced to refine the limb motion features of individuals. Secondly, spatial positions and interaction distances were taken as individual interaction features. Finally, the individual motion features and spatial position relation features were fused as the node features of an undirected graph of the group scene, and a Graph Convolutional Network (GCN) was adopted to further capture activity interactions in the global scene, thereby recognizing the group activity. Experimental results show that this framework achieves 92.8% and 97.7% recognition accuracy on two group activity recognition datasets, CAD (Collective Activity Dataset) and CAE (Collective Activity Extended Dataset), respectively. Compared with the Actor Relationship Graph (ARG) and the Confidence Energy Recurrent Network (CERN) on the CAD dataset, this framework improves recognition accuracy by 1.8 and 5.6 percentage points, respectively. Ablation experiments further confirm that the proposed algorithm achieves better recognition performance.
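
The fusion-and-GCN stage described in the abstract can be illustrated with a minimal sketch (PyTorch assumed; all names, dimensions, and the distance-based adjacency below are illustrative assumptions, not the paper's exact formulation): per-person motion and position features are fused into graph node features, an adjacency matrix is derived from pairwise interaction distances, and one graph convolution aggregates scene-level context before classification.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SceneGCN(nn.Module):
    """One-layer GCN over an undirected scene graph of individuals (sketch)."""
    def __init__(self, motion_dim, pos_dim, hidden_dim, num_classes):
        super().__init__()
        self.fuse = nn.Linear(motion_dim + pos_dim, hidden_dim)  # fuse motion + position into node features
        self.gcn = nn.Linear(hidden_dim, hidden_dim)             # shared graph-convolution weight
        self.classifier = nn.Linear(hidden_dim, num_classes)     # scene-level activity classifier

    def forward(self, motion, pos):
        # motion: (N, motion_dim) refined limb-motion features per person
        # pos:    (N, pos_dim) spatial position features per person
        x = F.relu(self.fuse(torch.cat([motion, pos], dim=-1)))  # node features of the scene graph
        # Hypothetical adjacency: closer individuals are assumed to interact more strongly.
        dist = torch.cdist(pos, pos)                             # (N, N) pairwise distances
        adj = torch.softmax(-dist, dim=-1)                       # row-normalised edge weights
        x = F.relu(self.gcn(adj @ x))                            # one round of message passing
        return self.classifier(x.mean(dim=0))                    # pool nodes, predict group activity

# Usage: 6 people, 512-d motion features, 2-d positions, 5 activity classes.
model = SceneGCN(512, 2, 256, 5)
logits = model(torch.randn(6, 512), torch.rand(6, 2))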

Weakly supervised fine-grained classification method of Alzheimer’s disease based on improved visual geometry group network
Shuang DENG, Xiaohai HE, Linbo QING, Honggang CHEN, Qizhi TENG
Journal of Computer Applications    2022, 42 (1): 302-309.   DOI: 10.11772/j.issn.1001-9081.2021020258

In order to address the small differences between Magnetic Resonance Imaging (MRI) images of Alzheimer's Disease (AD) patients and Normal Control (NC) subjects, and the resulting difficulty of classifying them, a weakly supervised fine-grained classification method for AD based on an improved Visual Geometry Group (VGG) network was proposed. In this method, the Weakly Supervised Data Augmentation Network (WSDAN) was taken as the basic model, mainly composed of a weakly supervised attention learning module, a data augmentation module, and a bilinear attention pooling module. Firstly, the feature map and the attention maps were generated by the weakly supervised attention learning network, and the attention maps were used to guide the data augmentation; both the original images and the augmented data were used as training input. Then, element-wise multiplication between the feature map and the attention maps was performed via the bilinear attention pooling algorithm to obtain the feature matrix. Finally, the feature matrix was used as the input of the linear classification layer. Experimental results of the WSDAN basic model with VGG19 as the feature extraction network on MRI data of AD show that, compared with the WSDAN basic model, the model using data augmentation alone improves accuracy, sensitivity and specificity by 1.6, 0.34 and 0.12 percentage points respectively; the model using only the improved VGG19 network improves accuracy and specificity by 0.7 and 2.82 percentage points respectively; and the model combining both methods improves accuracy, sensitivity and specificity by 2.1, 1.91 and 2.19 percentage points respectively.
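
The bilinear attention pooling step can be sketched as follows (PyTorch assumed; shapes, names, and the use of average pooling are hypothetical assumptions, not the exact WSDAN implementation): each attention map gates the backbone feature map element-wise, the gated maps are globally pooled, and the pooled vectors are stacked into the feature matrix fed to the linear classification layer.

import torch

def bilinear_attention_pooling(features, attentions):
    # features:   (B, C, H, W) feature map from the VGG19 backbone
    # attentions: (B, M, H, W) attention maps, one per attended part
    # Element-wise product between every attention map and the feature map:
    # (B, M, 1, H, W) * (B, 1, C, H, W) -> (B, M, C, H, W)
    parts = attentions.unsqueeze(2) * features.unsqueeze(1)
    # Global average pooling over the spatial dimensions gives the feature matrix.
    feature_matrix = parts.flatten(3).mean(dim=-1)   # (B, M, C)
    return feature_matrix.flatten(1)                 # (B, M*C), input to the linear layer

# Usage: batch of 2 MRI slices, 512 channels, 32 attention maps on a 14x14 grid.
fm = bilinear_attention_pooling(torch.randn(2, 512, 14, 14), torch.rand(2, 32, 14, 14))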
